A Fault Tolerant, Dynamic and Low Latency BDII Architecture for Grids
The current BDII model relies on information gathering from agents that run
on each core node of a Grid. This information is then published into a Grid
wide information resource known as the Top BDII. The Top level BDIIs are
typically updated in cycles of a few minutes each. A new BDII architecture is
proposed and described in this paper, based on the hypothesis that only a few
attribute values change in each BDII information cycle, and that it may
therefore not be necessary to update every parameter in a cycle. It has been
demonstrated that significant performance gains can be achieved by exchanging
only the information about records that changed during a cycle. Our
investigations have led us to implement a low latency and fault tolerant BDII
system that involves only minimal data transfer and facilitates secure
transactions in a Grid environment.
Comment: 18 pages; 10 figures; 4 tables
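The delta-exchange idea at the core of the proposed architecture can be sketched in a few lines: compare the current information snapshot against the previous one, ship only the changed records, and merge them at the Top BDII. This is a minimal illustration of the concept only; the record names and values below are hypothetical, not taken from an actual BDII schema.

```python
# Sketch of the delta-update idea behind the proposed BDII architecture:
# instead of republishing every attribute each cycle, compute and exchange
# only the records that changed. Record names and values are hypothetical.

def compute_delta(previous, current):
    """Return only the records whose values changed (or are new)."""
    return {key: value for key, value in current.items()
            if previous.get(key) != value}

def apply_delta(snapshot, delta):
    """Merge a delta into the last full snapshot held by the Top BDII."""
    merged = dict(snapshot)
    merged.update(delta)
    return merged

# Cycle n-1: full snapshot held at the Top BDII.
snapshot = {"CE.FreeSlots": 120, "SE.FreeSpace": 900, "Site.Status": "ok"}
# Cycle n: only one attribute changed at the source agent.
current = {"CE.FreeSlots": 118, "SE.FreeSpace": 900, "Site.Status": "ok"}

delta = compute_delta(snapshot, current)   # only the changed record travels
updated = apply_delta(snapshot, delta)
```

When only a handful of attributes change per cycle, the delta is a small fraction of the full snapshot, which is where the latency and bandwidth savings come from.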
A deep reinforcement learning based homeostatic system for unmanned position control
Deep Reinforcement Learning (DRL) has been proven capable of realising optimal control by minimising the error in dynamic systems. However, in many real-world operations, the exact behaviour of the environment is unknown. In such environments, random changes cause the system to reach different states for the same action. Applying DRL to unpredictable environments is therefore difficult, as the states of the world cannot be known when the transition and reward functions are non-stationary. In this paper, a mechanism to encapsulate the randomness of the environment is suggested, using a novel bio-inspired homeostatic approach based on a hybrid of the Receptor Density Algorithm (an anomaly detection application of artificial immune systems) and a Plastic Spiking Neuronal model. DRL is then introduced to run in conjunction with this hybrid model. The system is tested on a vehicle autonomously re-positioning itself in an unpredictable environment. Our results show that the DRL-based process control raised the accuracy of the hybrid model by 32%.
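The error-minimising reinforcement learning loop that this work builds on can be illustrated with a tabular sketch for one-dimensional position control. This is an assumption-laden toy, not the paper's method: the grid size, target, reward (negative distance to target) and hyperparameters are all invented, and the hybrid immune/spiking components are not modelled.

```python
import random

# Minimal tabular Q-learning sketch for 1-D position control, illustrating
# the error-minimising reinforcement learning loop the paper builds on.
# Grid size, target, reward and hyperparameters are assumptions; the
# paper's hybrid immune/spiking components are not modelled here.

GRID, TARGET = 10, 7
ACTIONS = [-1, 0, 1]                      # move left, stay, move right
q = {(s, a): 0.0 for s in range(GRID) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
state = 0
for _ in range(2000):
    if random.random() < epsilon:         # explore the unpredictable world
        action = random.choice(ACTIONS)
    else:                                 # exploit the learned policy
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    nxt = min(GRID - 1, max(0, state + action))
    reward = -abs(nxt - TARGET)           # error signal: distance to target
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt if nxt != TARGET else 0   # restart after reaching the target
```

After training, the greedy policy steers toward the target from any visited position; the paper's contribution lies in keeping such a loop stable when the transition and reward functions drift.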
Towards In-Transit Analytics for Industry 4.0
Industry 4.0, or Digital Manufacturing, is a vision of inter-connected
services to facilitate innovation in the manufacturing sector. A fundamental
requirement of innovation is the ability to visualise manufacturing data, in
order to discover new insight for increased competitive advantage. This
article describes the enabling technologies that facilitate In-Transit
Analytics, which is a necessary precursor for Industrial Internet of Things
(IIoT) visualisation.
Comment: 8 pages, 10th IEEE International Conference on Internet of Things
(iThings-2017), Exeter, UK, 201
Bulk Scheduling with the DIANA Scheduler
Results from the research and development of a Data Intensive and Network
Aware (DIANA) scheduling engine, to be used primarily for data intensive
sciences such as physics analysis, are described. In Grid analyses, tasks can
involve thousands of computing, data handling, and network resources. The
central problem in the scheduling of these resources is the coordinated
management of computation and data at multiple locations and not just data
replication or movement. However, this can prove to be a rather costly
operation, and efficient scheduling can be a challenge if compute and data
resources are mapped without considering network costs. We have implemented an adaptive
algorithm within the so-called DIANA Scheduler which takes into account data
location and size, network performance and computation capability in order to
enable efficient global scheduling. DIANA is a performance-aware and
economy-guided Meta Scheduler. It iteratively allocates each job to the site
that is most likely to produce the best performance as well as optimizing the
global queue for any remaining jobs. Therefore it is equally suitable whether a
single job is being submitted or bulk scheduling is being performed. Results
indicate that considerable performance improvements can be gained by adopting
the DIANA scheduling approach.
Comment: 12 pages, 11 figures. To be published in the IEEE Transactions on
Nuclear Science, IEEE Press. 200
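The data- and network-aware site selection described above can be sketched as a cost function that combines data transfer time, compute time and queue wait, with the job allocated to the cheapest site. The weighting scheme and the site figures below are hypothetical illustrations, not the actual DIANA cost model.

```python
# Sketch of performance-aware site selection in the spirit of DIANA:
# score each candidate site by combining data transfer cost, compute cost
# and queue cost, then allocate the job to the cheapest site. The cost
# terms and site figures are hypothetical, not the paper's actual model.

def site_cost(site, data_size_gb):
    transfer = data_size_gb / site["bandwidth_gbps"]      # time to move data
    compute = site["job_cpu_hours"] / site["cpu_power"]   # time to compute
    queue = site["queued_jobs"] * site["avg_job_hours"]   # wait in the queue
    return transfer + compute + queue

def schedule(job_data_gb, sites):
    """Allocate the job to the site most likely to finish it first."""
    return min(sites, key=lambda s: site_cost(s, job_data_gb))

sites = [
    # Near the data: slow CPUs and a long queue, but fast access to input.
    {"name": "site-a", "bandwidth_gbps": 1.0, "cpu_power": 2.0,
     "job_cpu_hours": 4.0, "queued_jobs": 10, "avg_job_hours": 0.5},
    # Idle and fast, but behind a thin network link to the input data.
    {"name": "site-b", "bandwidth_gbps": 0.2, "cpu_power": 4.0,
     "job_cpu_hours": 4.0, "queued_jobs": 0, "avg_job_hours": 0.5},
]
best = schedule(job_data_gb=50.0, sites=sites)
```

With a 50 GB input, the site near the data wins despite its queue, which is exactly the kind of trade-off a scheduler misses when it ignores network costs.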
An Architecture for Integrated Intelligence in Urban Management using Cloud Computing
With the emergence of new methodologies and technologies it has now become
possible to manage large amounts of environmental sensing data and apply new
integrated computing models to acquire information intelligence. This paper
advocates the application of cloud capacity to support the information,
communication and decision making needs of a wide variety of stakeholders in
the complex business of the management of urban and regional development. The
complexity lies in the interactions and impacts embodied in the concept of the
urban-ecosystem at various governance levels. This highlights the need for more
effective integrated environmental management systems. This paper offers a
user-orientated approach based on requirements for an effective management of
the urban-ecosystem and the potential contributions that can be supported by
the cloud computing community. Furthermore, the commonality of the influence of
the drivers of change at the urban level offers the opportunity for the cloud
computing community to develop generic solutions that can serve the needs of
hundreds of cities across Europe and indeed globally.
Comment: 6 pages, 3 figures
Blockchain standards for compliance and trust
Blockchain methods are emerging as practical tools for validation, record-keeping, and access control, in addition to their early applications in cryptocurrency. This column examines the options for using blockchains to enhance security, trust, and compliance in a variety of industry settings, and surveys the current state of blockchain standards.
Representing variant calling format as directed acyclic graphs to enable the use of cloud computing for efficient and cost effective genome analysis
Ever since the completion of the Human Genome Project in 2003, the human genome has been represented as a linear sequence of 3.2 billion base pairs, referred to as the "Reference Genome". Since then, rapid advancements in technology have made it easier to sequence the genomes of individuals, which in turn has created a need to represent the new information using a different representation. Several attempts have been made to represent the genome sequence as a graph, albeit for different purposes. Here we look at the Variant Calling Format (VCF) file, which carries information about variations within genomes and is the primary format of choice for genome analysis tools. This short paper aims to motivate work on representing the VCF file as Directed Acyclic Graphs (DAGs) to run on a cloud, in order to exploit the high performance capabilities provided by cloud computing.
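The variant-to-DAG idea can be sketched directly: each invariant reference segment becomes a shared node, each variant site branches into its reference and alternative alleles, and the branches re-join afterwards, so paths through the graph spell out possible genomes. The record layout below is a simplified stand-in for real VCF records, and the sequence and variants are invented for illustration.

```python
# Sketch of turning VCF-style variant records into a DAG: invariant
# reference segments are shared nodes, each variant site branches into its
# REF and ALT alleles, and the branches re-join at the next segment.
# The (pos, ref, alt) tuples are a simplified stand-in for VCF records.

reference = "ACATGTCA"
variants = [(2, "A", "G"), (5, "T", "C")]   # hypothetical variant set

def build_dag(reference, variants):
    """Adjacency list keyed by node label (segments assumed distinct)."""
    dag, cursor = {}, 0
    prev_nodes = ["start"]
    for pos, ref, alt in sorted(variants):
        segment = reference[cursor:pos]                # invariant stretch
        for node in prev_nodes:
            dag.setdefault(node, []).append(segment)
        # both alleles branch off the shared segment, labelled by position
        ref_node, alt_node = f"{ref}:{pos}", f"{alt}:{pos}"
        dag.setdefault(segment, []).extend([ref_node, alt_node])
        prev_nodes = [ref_node, alt_node]
        cursor = pos + len(ref)
    tail = reference[cursor:]                          # trailing segment
    for node in prev_nodes:
        dag.setdefault(node, []).append(tail)
    return dag

dag = build_dag(reference, variants)
```

Because each node's successors are independent of how the node was reached, per-path work such as consequence annotation maps naturally onto parallel cloud workers, which is the motivation the paper sets out.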